Kripke model


Comparing State-Representations for DEL Model Checking

Behnke, Gregor, Gattinger, Malvin, Ghosh, Avijeet, Wang, Haitian

arXiv.org Artificial Intelligence

Model checking with the standard Kripke models used in (Dynamic) Epistemic Logic leads to scalability issues. Hence alternative representations have been developed, in particular symbolic structures based on Binary Decision Diagrams (BDDs) and succinct models based on mental programs. While symbolic structures have been shown to perform well in practice, their theoretical complexity was not known so far. On the other hand, for succinct models model checking is known to be PSPACE-complete, but no implementations are available. We close this gap and directly relate the two representations. We show that model checking DEL on symbolic structures encoded with BDDs is also PSPACE-complete. In fact, already model checking Epistemic Logic without dynamics is PSPACE-complete on symbolic structures. We also provide direct translations between BDDs and mental programs. Both translations yield exponential outputs. For the translation from mental programs to BDDs we show that no small translation exists. For the other direction we conjecture the same.
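To make the contrast concrete, here is a minimal sketch of explicit-state model checking for the knowledge operator K_a. Worlds are truth assignments and each agent's indistinguishability relation is induced by the atoms it observes; the agents and observation sets are illustrative, not from the paper. The explicit model has 2^n worlds for n atoms, which is exactly the blow-up that BDD-based symbolic structures and mental programs aim to avoid.

```python
from itertools import product

# Explicit Kripke-style model for epistemic logic (S5-like): worlds are
# assignments to atomic propositions; an agent's indistinguishability
# relation is induced by the variables it observes.
atoms = ("p", "q")
worlds = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=len(atoms))]
observes = {"alice": {"p"}, "bob": {"q"}}  # hypothetical observation sets

def indistinguishable(agent, w, v):
    """Two worlds look the same to an agent iff its observed atoms agree."""
    return all(w[a] == v[a] for a in observes[agent])

def knows(agent, w, prop):
    """K_agent(prop) holds at w iff prop holds in every indistinguishable world."""
    return all(prop(v) for v in worlds if indistinguishable(agent, w, v))

w0 = {"p": True, "q": False}
print(knows("alice", w0, lambda v: v["p"]))      # alice observes p directly
print(knows("alice", w0, lambda v: not v["q"]))  # alice cannot rule q in or out
```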


Agentic System with Modal Logic for Autonomous Diagnostics

Sulc, Antonin, Hellert, Thorsten

arXiv.org Artificial Intelligence

The development of intelligent agents, particularly those powered by language models (LMs), has highlighted the critical role such agents play in environments that require intelligent and autonomous decision-making. Environments are not passive testing grounds: they supply the data from which agents learn and in which they must act, often under challenging conditions that demand adaptive, complex, and autonomous decision-making. While the paradigm of scaling models and datasets has led to remarkable emergent capabilities, we argue that scaling the structure, fidelity, and logical consistency of agent reasoning within these environments is a crucial, yet underexplored, dimension of AI research. This paper introduces a neuro-symbolic multi-agent architecture where the belief states of individual agents are formally represented as Kripke models. This foundational choice enables them to reason about the concepts of \emph{possibility} and \emph{necessity} using the formal language of modal logic. In this work, we encode immutable, domain-specific knowledge as logical constraints, enabling informed root-cause diagnoses that are proper, reliable, and explainable. We show how these constraints actively guide the hypothesis generation of LMs, effectively preventing them from reaching physically or logically untenable conclusions. In a high-fidelity simulated particle accelerator environment, our system successfully diagnoses complex, cascading failures by combining the powerful semantic intuition of LMs with the rigorous, verifiable validation of modal logic and a factual world model, showcasing a viable path toward more robust, reliable, and verifiable autonomous agents.
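The core mechanism — using possibility and necessity over a Kripke model to filter LM-generated hypotheses — can be sketched as follows. The world names, accessibility relation, and fault atoms below are purely illustrative stand-ins, not the paper's actual model.

```python
# Box/diamond evaluation over an explicit accessibility relation, used to
# rule out hypotheses that violate the agent's world model.
worlds = {"w_ok", "w_magnet_fault", "w_rf_fault"}
access = {  # worlds the diagnosing agent considers possible from each world
    "w_ok": {"w_ok", "w_magnet_fault"},
    "w_magnet_fault": {"w_magnet_fault"},
    "w_rf_fault": {"w_rf_fault"},
}
valuation = {"w_ok": set(), "w_magnet_fault": {"beam_loss"}, "w_rf_fault": {"beam_loss"}}

def possible(w, atom):   # diamond: atom holds in some accessible world
    return any(atom in valuation[v] for v in access[w])

def necessary(w, atom):  # box: atom holds in every accessible world
    return all(atom in valuation[v] for v in access[w])

# An LM-proposed hypothesis survives only if it is possible given the model.
hypotheses = ["beam_loss", "coolant_leak"]
admissible = [h for h in hypotheses if possible("w_ok", h)]
print(admissible)
```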


From Neural Networks to Logical Theories: The Correspondence between Fibring Modal Logics and Fibring Neural Networks

Harzli, Ouns El, Grau, Bernardo Cuenca, Garcez, Artur d'Avila, Horrocks, Ian, Besold, Tarek R.

arXiv.org Artificial Intelligence

Fibring of modal logics is a well-established formalism for combining countable families of modal logics into a single fibred language with common semantics, characterized by fibred models. Inspired by this formalism, fibring of neural networks was introduced as a neurosymbolic framework for combining learning and reasoning in neural networks. Fibring of neural networks uses the (pre-)activations of a trained network to evaluate a fibring function computing the weights of another network whose outputs are injected back into the original network. However, the exact correspondence between fibring of neural networks and fibring of modal logics was never formally established. In this paper, we close this gap by formalizing the idea of fibred models \emph{compatible} with fibred neural networks. Using this correspondence, we then derive non-uniform logical expressiveness results for Graph Neural Networks (GNNs), Graph Attention Networks (GATs) and Transformer encoders. Longer-term, the goal of this paper is to open the way for the use of fibring as a formalism for interpreting the logical theories learnt by neural networks with the tools of computational logic.
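The fibring mechanism described above — a trained network's pre-activations feeding a fibring function that computes another network's weights, whose output is then injected back — can be sketched numerically. The network sizes, weights, and the linear fibring function below are all hypothetical choices for illustration.

```python
def relu(x):
    return max(0.0, x)

def linear(ws, xs, b):
    """Dot product of weights and inputs, plus bias."""
    return sum(w * x for w, x in zip(ws, xs)) + b

# Hypothetical "original" network A and one-neuron fibred network B.
def fibred_forward(x, wA=(0.5, -0.3), bA=0.1, fib_scale=2.0):
    pre = linear(wA, x, bA)              # A's pre-activation
    wB = (fib_scale * pre,)              # fibring function: B's weights depend on A's pre-activation
    yB = relu(linear(wB, (x[0],), 0.0))  # evaluate B on part of the input
    return relu(pre + yB)                # inject B's output back into A

print(fibred_forward((1.0, 1.0)))
```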


Graph Learning via Logic-Based Weisfeiler-Leman Variants and Tabularization

Jaakkola, Reijo, Janhunen, Tomi, Kuusisto, Antti, Ortiz, Magdalena, Selin, Matias, Šimkus, Mantas

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) are a powerful solution in many settings where the data is naturally graph-structured; they enable us to learn not only from individual features of objects, but also from the connections between them [15, 20]. They make the graph topology a key part of the learning process, enabling more accurate results in a range of scenarios [19, 21]. However, graph learning is computationally expensive, and training GNNs is usually much slower than training state-of-the-art methods on tabular data [11, 5]. In this paper, we aim to achieve graph learning with simpler, more efficient methods for tabular data, after enriching graph nodes with information about the graph topology. Our stepping stone is the Weisfeiler-Leman algorithm (WL), a well-known technique for approximating graph isomorphism [18]. In addition to its many applications in graph theory and other areas of computer science, WL has been very influential in graph learning. It has been used in graph kernels, enabling algorithms such as support vector machines to operate directly on graphs [16, 11]. Crucially, it has been used to characterize the expressive power of several types of GNNs [8, 20, 1], suggesting WL as a suitable tool to distill the topological information that GNNs can learn from.
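The 1-dimensional Weisfeiler-Leman algorithm (color refinement) mentioned above is simple to state: each node's color is repeatedly refined by the multiset of its neighbours' colors. A minimal sketch, with an illustrative three-node path:

```python
# One-dimensional Weisfeiler-Leman (color refinement) on an undirected graph,
# producing node colors of the kind used to enrich tabular node features.
def wl_colors(adj, rounds=3):
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        # New signature = (own color, sorted multiset of neighbour colors).
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        # Canonically relabel distinct signatures with small integers.
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: relabel[signatures[v]] for v in adj}
    return colors

# On a path a-b-c, the two endpoints get one color, the middle node another.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(wl_colors(path))
```

Iterating until the coloring stabilises gives the standard WL partition; here a fixed round count suffices for small graphs.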


A Logic of General Attention Using Edge-Conditioned Event Models (Extended Version)

Belardinelli, Gaia, Bolander, Thomas, Watzl, Sebastian

arXiv.org Artificial Intelligence

In this work, we present the first general logic of attention. Attention is a powerful cognitive ability that allows agents to focus on potentially complex information, such as logically structured propositions, higher-order beliefs, or what other agents pay attention to. This ability is a strength, as it helps to ignore what is irrelevant, but it can also introduce biases when some types of information or agents are systematically ignored. Existing dynamic epistemic logics for attention cannot model such complex attention scenarios, as they only model attention to atomic formulas. Additionally, such logics quickly become cumbersome, as their size grows exponentially in the number of agents and announced literals. Here, we introduce a logic that overcomes both limitations. First, we generalize edge-conditioned event models, which we show to be as expressive as standard event models yet exponentially more succinct (generalizing both standard event models and generalized arrow updates). Second, we extend attention to arbitrary formulas, allowing agents to also attend to other agents' beliefs or attention. Our work treats attention as a modality, like belief or awareness. We introduce attention principles that impose closure properties on that modality and that can be used in its axiomatization. Throughout, we illustrate our framework with examples of AI agents reasoning about human attentional biases, demonstrating how such agents can discover these biases.


Graph neural networks and MSO

Ahvonen, Veeti, Heiman, Damian, Kuusisto, Antti

arXiv.org Artificial Intelligence

Graph neural networks (GNNs) are a deep learning architecture for processing graph-structured data, and they have proven useful in various applications. They operate on labeled graphs and assign feature vectors (vectors of reals) to nodes in discrete rounds. Feature updates are carried out in each node by aggregating the feature vectors of the node's neighbours into a single vector and then combining it with the node's own feature vector. The result we prove in this article concerns recurrent graph neural networks; while a GNN typically updates node features for a constant number of rounds, a recurrent GNN can perform updates for an unlimited number of rounds. Moreover, we consider two types of recurrent GNNs: ones that use real numbers (GNN[R]s) and ones that use floating-point numbers (GNN[F]s).
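The aggregate-then-combine update described above can be sketched as a single synchronous round; the sum aggregation and averaging combine step are illustrative placeholder choices, not the paper's definitions. A recurrent GNN simply iterates this round indefinitely instead of a constant number of times.

```python
# One synchronous feature-update round of a GNN: aggregate the neighbours'
# feature vectors into one vector, then combine with the node's own features.
def gnn_round(adj, features):
    updated = {}
    for v in adj:
        # Aggregate neighbour features component-wise (sum aggregation)...
        agg = [sum(features[u][i] for u in adj[v]) for i in range(len(features[v]))]
        # ...then combine with the node's own feature vector (toy average).
        updated[v] = [0.5 * x + 0.5 * a for x, a in zip(features[v], agg)]
    return updated

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
feats = {"a": [1.0], "b": [0.0], "c": [1.0]}
print(gnn_round(adj, feats))
```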


A process algebraic framework for multi-agent dynamic epistemic systems

Aldini, Alessandro

arXiv.org Artificial Intelligence

This paper combines the classical model of labeled transition systems with the epistemic model for reasoning about knowledge. The result is a unifying framework for modeling and analyzing multi-agent, knowledge-based, dynamic systems. On the modeling side, we propose a process algebraic, agent-oriented specification language that makes such a framework easy to use for practical purposes. On the verification side, we define a modal logic encompassing temporal and epistemic operators.


Logical Characterizations of Recurrent Graph Neural Networks with Reals and Floats

Ahvonen, Veeti, Heiman, Damian, Kuusisto, Antti, Lutz, Carsten

arXiv.org Artificial Intelligence

In pioneering work from 2019, Barcel\'o and coauthors identified logics that precisely match the expressive power of constant iteration-depth graph neural networks (GNNs) relative to properties definable in first-order logic. In this article, we give exact logical characterizations of recurrent GNNs in two scenarios: (1) in the setting with floating-point numbers and (2) with reals. For floats, the formalism matching recurrent GNNs is a rule-based modal logic with counting, while for reals we use a suitable infinitary modal logic, also with counting. These results give exact matches between logics and GNNs in the recurrent setting without relativising to a background logic in either case, but using some natural assumptions about floating-point arithmetic. Applying our characterizations, we also prove that, relative to graph properties definable in monadic second-order logic (MSO), our infinitary and rule-based logics are equally expressive. This implies that recurrent GNNs with reals and floats have the same expressive power over MSO-definable properties and shows that, for such properties, also recurrent GNNs with reals are characterized by a (finitary!) rule-based modal logic. In the general case, in contrast, the expressive power with floats is weaker than with reals. In addition to logic-oriented results, we also characterize recurrent GNNs, with both reals and floats, via distributed automata, drawing links to distributed computing models.


Communication Modalities

Kuznets, Roman

arXiv.org Artificial Intelligence

Epistemic analysis of distributed systems is one of the biggest successes among applications of logic in computer science. The reason for that is that agents' actions are necessarily guided by their knowledge. Thus, epistemic modal logic, with its knowledge and belief modalities (and group versions thereof), has played a vital role in establishing both impossibility results and necessary conditions for solvable distributed tasks. In distributed systems, knowledge is largely attained via communication. It has been standard in both distributed systems and dynamic epistemic logic to treat incoming messages as trustworthy, thus, creating difficulties in the epistemic analysis of byzantine distributed systems where faulty agents may lie. In this paper, we argue that handling such communication scenarios calls for additional modalities representing the informational content of messages that should not be taken at face value. We present two such modalities: hope for the case of fully byzantine agents and creed for non-uniform communication protocols in general.


Simplicial Models for the Epistemic Logic of Faulty Agents

Goubault, Eric, Kniazev, Roman, Ledent, Jeremy, Rajsbaum, Sergio

arXiv.org Artificial Intelligence

In recent years, several authors have been investigating simplicial models, a model of epistemic logic based on higher-dimensional structures called simplicial complexes. In the original formulation, simplicial models were always assumed to be pure, meaning that all worlds have the same dimension. This is equivalent to the standard S5n semantics of epistemic logic, based on Kripke models. By removing the assumption that models must be pure, we can go beyond the usual Kripke semantics and study epistemic logics where the number of agents participating in a world can vary. This approach has been developed in a number of papers, with applications in fault-tolerant distributed computing where processes may crash during the execution of a system. A difficulty that arises is that subtle design choices in the definition of impure simplicial models can result in different axioms of the resulting logic. In this paper, we classify those design choices systematically, and axiomatize the corresponding logics. We illustrate them via distributed computing examples of synchronous systems where processes may crash.
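The distinction between pure and impure simplicial models reduces to a simple check on the maximal faces (facets): a complex is pure iff all facets have the same dimension. A minimal sketch, with illustrative (agent, local-state) vertices:

```python
# A simplicial model is pure when all facets (maximal faces) have the same
# dimension; impure complexes model worlds in which some agents have crashed.
def is_pure(facets):
    dims = {len(f) - 1 for f in facets}  # dimension = number of vertices - 1
    return len(dims) <= 1

# Facets as frozensets of (agent, local-state) vertices; names are illustrative.
pure = [frozenset({("a", 0), ("b", 0), ("c", 0)}),
        frozenset({("a", 0), ("b", 1), ("c", 1)})]
impure = [frozenset({("a", 0), ("b", 0), ("c", 0)}),
          frozenset({("a", 0), ("b", 1)})]  # a world where agent c has crashed
print(is_pure(pure), is_pure(impure))
```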